188 research outputs found
Exploring explanations for matrix factorization recommender systems (Position Paper)
In this paper we address the problem of finding explanations for collaborative filtering algorithms that use matrix factorization methods. We look for explanations that increase the transparency of the system. To do so, we propose two measures. First, we show a model that describes the contribution of each previous rating given by a user to the generated recommendation. Second, we measure the influence of changing each previous rating of a user on the outcome of the recommender system. We show that, under the assumption that there are many more users in the system than there are items, we can efficiently generate each type of explanation by using linear approximations of the recommender system's behavior for each user, and computing partial derivatives of predicted ratings with respect to each user's provided ratings.
http://scholarworks.boisestate.edu/fatrec/2017/1/7/ (Published version)
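The partial-derivative explanation described above can be sketched for the common case where a user's factor vector is a ridge-regression fit against fixed item factors (an assumption for illustration; the paper's exact model may differ), which makes each predicted rating a linear function of the user's provided ratings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, k, lam = 6, 3, 0.1
Q = rng.normal(size=(n_items, k))      # item factors (assumed fixed)
r_u = rng.uniform(1, 5, size=n_items)  # the user's observed ratings

# Closed-form user factor (ridge regression on the user's ratings),
# which makes the predicted rating a linear function of r_u.
A_inv = np.linalg.inv(Q.T @ Q + lam * np.eye(k))
p_u = A_inv @ Q.T @ r_u

i = 0  # target item
pred = Q[i] @ p_u

# Influence of each observed rating j on the prediction for item i:
# d(pred_i)/d(r_uj) = q_i^T (Q^T Q + lam*I)^{-1} q_j
influence = Q @ A_inv @ Q[i]

# Sanity check against a finite-difference perturbation of rating j = 1.
eps, j = 1e-6, 1
r_pert = r_u.copy()
r_pert[j] += eps
pred_pert = Q[i] @ (A_inv @ Q.T @ r_pert)
assert abs((pred_pert - pred) / eps - influence[j]) < 1e-4
```

Because the prediction is linear in the user's ratings, the finite-difference check matches the derivative essentially exactly.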
Quantifying Information Overload in Social Media and its Impact on Social Contagions
Information overload has become a ubiquitous problem in modern society.
Social media users and microbloggers receive an endless flow of information,
often at a rate far higher than their cognitive abilities to process the
information. In this paper, we conduct a large scale quantitative study of
information overload and evaluate its impact on information dissemination in
the Twitter social media site. We model social media users as information
processing systems that queue incoming information according to some policies,
process information from the queue at some unknown rates and decide to forward
some of the incoming information to other users. We show how timestamped data
about tweets received and forwarded by users can be used to uncover key
properties of their queueing policies and estimate their information processing
rates and limits. Such an understanding of users' information processing
behaviors allows us to infer whether and to what extent users suffer from
information overload.
Our analysis provides empirical evidence of information processing limits for
social media users and the prevalence of information overloading. The most
active and popular social media users are often the ones that are overloaded.
Moreover, we find that the rate at which users receive information impacts
their processing behavior, including how they prioritize information from
different sources, how much information they process, and how quickly they
process information. Finally, the susceptibility of a social media user to
social contagions depends crucially on the rate at which she receives
information. An exposure to a piece of information, be it an idea, a convention
or a product, is much less effective for users that receive information at
higher rates, meaning they need more exposures to adopt a particular contagion.
Comment: To appear at ICWSM '1
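As a minimal illustration of modeling a user as an information-processing queue, one can compare a user's incoming rate to their forwarding rate from timestamped data (a toy sketch; the paper infers queueing policies and processing limits far more carefully, and all names below are invented):

```python
def processing_stats(received_ts, forwarded_ts):
    """received_ts / forwarded_ts: sorted UNIX timestamps (seconds) of
    items a user received and re-posted. Returns the incoming rate and
    outgoing rate (items/hour) and the forwarded share -- a crude proxy
    for how much of the queue the user actually processes."""
    hours = (received_ts[-1] - received_ts[0]) / 3600.0
    in_rate = len(received_ts) / hours
    out_rate = len(forwarded_ts) / hours
    share = out_rate / in_rate
    return in_rate, out_rate, share

# Toy data: 120 items received over ~2 hours, 12 of them forwarded.
recv = [i * 60.0 for i in range(120)]   # one arrival per minute
fwd = [i * 600.0 for i in range(12)]    # one forward per ten minutes
in_rate, out_rate, share = processing_stats(recv, fwd)
```

A user whose forwarded share keeps dropping as the incoming rate grows is a candidate for being overloaded in the sense studied above.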
iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making
People are rated and ranked for algorithmic decision making in an
increasing number of applications, typically based on machine learning.
Research on how to incorporate fairness into such tasks has prevalently pursued
the paradigm of group fairness: giving adequate success rates to specifically
protected groups. In contrast, the alternative paradigm of individual fairness
has received relatively little attention, and this paper advances this less
explored direction. The paper introduces a method for probabilistically mapping
user records into a low-rank representation that reconciles individual fairness
and the utility of classifiers and rankings in downstream applications. Our
notion of individual fairness requires that users who are similar in all
task-relevant attributes such as job qualification, disregarding all
potentially discriminating attributes such as gender, should have similar
outcomes. We demonstrate the versatility of our method by applying it to
classification and learning-to-rank tasks on a variety of real-world datasets.
Our experiments show substantial improvements over the best prior work for this
setting.
Comment: Accepted at ICDE 2019. Please cite the ICDE 2019 proceedings version.
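The individual-fairness requirement above can be made concrete with a toy check: two records that agree on all task-relevant attributes but differ only in a protected one should receive similar outcomes (a hypothetical sketch of the property being enforced; iFair itself learns a probabilistic low-rank representation rather than auditing scores directly):

```python
import numpy as np

# Toy records: columns = [qualification, experience, protected_flag]
X = np.array([[0.9, 0.8, 0],
              [0.9, 0.8, 1],   # identical except the protected attribute
              [0.2, 0.1, 0]])
task_relevant = [0, 1]         # indices of the non-protected attributes

def individual_fairness_gap(X, scores, relevant, tol=0.05):
    """Flag pairs that are near-identical on task-relevant attributes
    but receive very different scores -- the violation that a learned
    individually fair representation is meant to prevent."""
    violations = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i, relevant] - X[j, relevant])
            if d < tol and abs(scores[i] - scores[j]) > tol:
                violations.append((i, j))
    return violations

biased_scores = np.array([0.9, 0.4, 0.1])  # penalizes the protected flag
fair_scores = np.array([0.9, 0.9, 0.1])
assert individual_fairness_gap(X, biased_scores, task_relevant) == [(0, 1)]
assert individual_fairness_gap(X, fair_scores, task_relevant) == []
```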
Index Coding: Rank-Invariant Extensions
An index coding (IC) problem consisting of a server and multiple receivers
with different side-information and demand sets can be equivalently represented
using a fitting matrix. A scalar linear index code for a given IC problem is a
matrix representing the transmitted linear combinations of the message symbols.
The length of an index code is then the number of transmissions (or
equivalently, the number of rows in the index code). An IC problem $I_{ext}$ is called an extension of another IC problem $I$ if the
fitting matrix of $I$ is a submatrix of the fitting matrix of $I_{ext}$. We first present a straightforward \textit{$m$-order} extension $I_{ext}$
of an IC problem $I$ for which an index code is
obtained by concatenating $m$ copies of an index code of $I$. The length
of the codes is the same for both $I$ and $I_{ext}$, and if the
index code for $I$ has optimal length then so does the extended code for
$I_{ext}$. More generally, an extended IC problem of $I$ having
the same optimal length as $I$ is said to be a \textit{rank-invariant}
extension of $I$. We then focus on $m$-order rank-invariant extensions
of $I$, and present constructions of such extensions based on involutory
permutation matrices.
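For readers unfamiliar with the basic objects, here is a minimal sketch of a scalar linear index code over GF(2) (a standard introductory example, not taken from this paper): the code is a matrix L, its length is the number of rows, and each receiver decodes its demand using the broadcast plus its side information.

```python
import numpy as np

# Toy IC problem over GF(2): three receivers, receiver i demands x_i and
# has the other two messages as side information.
# A scalar linear index code is a matrix L: the server broadcasts L @ x.
L = np.array([[1, 1, 1]])            # one transmission: x1 + x2 + x3
x = np.array([1, 0, 1])              # message symbols

broadcast = (L @ x) % 2              # what every receiver hears

# Receiver 1 (side information x2, x3) recovers x1 from the broadcast:
x1_hat = (broadcast[0] - x[1] - x[2]) % 2
assert x1_hat == x[0]

# The length of the index code is its number of rows (transmissions):
assert L.shape[0] == 1
```

One transmission serves all three receivers, instead of the three that separate unicasts would require.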
Optimal Index Codes via a Duality between Index Coding and Network Coding
In Index Coding, the goal is to use a broadcast channel as efficiently as
possible to communicate information from a source to multiple receivers which
can possess some of the information symbols at the source as side-information.
In this work, we present a duality relationship between index coding (IC) and
multiple-unicast network coding (NC). It is known that the IC problem can be
represented using a side-information graph $G$ (with the number of vertices $n$
equal to the number of source symbols). The size of the maximum acyclic induced
subgraph of $G$, denoted by $\mathrm{MAIS}$, is a lower bound on the \textit{broadcast rate}.
For IC problems with $\mathrm{MAIS} = n-1$ and $\mathrm{MAIS} = n-2$, prior work has shown that
binary (over $\mathbb{F}_2$) linear index codes achieve the $\mathrm{MAIS}$ lower bound
for the broadcast rate and thus are optimal. In this work, we use the
duality relationship between NC and IC to show that for a class of IC problems
with $\mathrm{MAIS} = n-3$, binary linear index codes achieve the $\mathrm{MAIS}$ lower bound on
the broadcast rate. In contrast, it is known that there exist IC problems with
$\mathrm{MAIS} = n-3$ and optimal broadcast rate strictly greater than $\mathrm{MAIS}$.
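The MAIS lower bound can be illustrated with a brute-force computation on a small side-information graph (a sketch for intuition only; the function names are ours):

```python
from itertools import combinations

def is_acyclic(nodes, edges):
    """DFS cycle check on the subgraph induced by `nodes`."""
    adj = {v: [w for (u, w) in edges if u == v and w in nodes] for v in nodes}
    state = {v: 0 for v in nodes}          # 0=unseen, 1=on stack, 2=done
    def dfs(v):
        state[v] = 1
        for w in adj[v]:
            if state[w] == 1 or (state[w] == 0 and dfs(w)):
                return True                # back edge found: cycle
        state[v] = 2
        return False
    return not any(state[v] == 0 and dfs(v) for v in nodes)

def mais(n, edges):
    """Size of the maximum acyclic induced subgraph (brute force)."""
    for size in range(n, 0, -1):
        for sub in combinations(range(n), size):
            if is_acyclic(set(sub), edges):
                return size
    return 0

# 3-cycle side-information graph: every 2-vertex induced subgraph is
# acyclic, so MAIS = 2 and the broadcast rate is at least 2.
assert mais(3, [(0, 1), (1, 2), (2, 0)]) == 2
```

Brute force is exponential in the number of vertices; it is only meant to make the definition concrete.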
Equity of Attention: Amortizing Individual Fairness in Rankings
Rankings of people and items are at the heart of selection processes,
match-making, and recommender systems, ranging from employment sites to sharing
economy platforms. As ranking positions influence the amount of attention the
ranked subjects receive, biases in rankings can lead to unfair distribution of
opportunities and resources, such as jobs or income.
This paper proposes new measures and mechanisms to quantify and mitigate
unfairness from a bias inherent to all rankings, namely, the position bias,
which leads to disproportionately less attention being paid to low-ranked
subjects. Our approach differs from recent fair ranking approaches in two
important ways. First, existing works measure unfairness at the level of
subject groups while our measures capture unfairness at the level of individual
subjects, and as such subsume group unfairness. Second, as no single ranking
can achieve individual attention fairness, we propose a novel mechanism that
achieves amortized fairness, where attention accumulated across a series of
rankings is proportional to accumulated relevance.
We formulate the challenge of achieving amortized individual fairness subject
to constraints on ranking quality as an online optimization problem and show
that it can be solved as an integer linear program. Our experimental evaluation
reveals that unfair attention distribution in rankings can be substantial, and
demonstrates that our method can improve individual fairness while retaining
high ranking quality.
Comment: Accepted to SIGIR 201
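The amortization idea can be sketched numerically: under a position-bias model (the geometric drop-off below is a hypothetical choice, not the paper's), alternating two near-equally relevant subjects across successive rankings brings accumulated attention closer to accumulated relevance than any single static ranking can:

```python
import numpy as np

def position_attention(k):
    """Hypothetical position-bias model: geometric drop-off in attention."""
    return 0.5 ** np.arange(k)

def amortized_unfairness(attention, relevance):
    """L1 distance between normalized accumulated attention and
    normalized accumulated relevance (0 = perfectly amortized-fair)."""
    a = attention / attention.sum()
    r = relevance / relevance.sum()
    return np.abs(a - r).sum()

relevance = np.array([0.51, 0.49])      # two near-equal subjects
w = position_attention(2)               # attention weights [1.0, 0.5]

# Always ranking subject 0 first gives it 2/3 of attention over 2 rankings:
static = np.array([w[0] + w[0], w[1] + w[1]])
# Alternating the top position amortizes attention to 50/50:
alternating = np.array([w[0] + w[1], w[1] + w[0]])

assert amortized_unfairness(alternating, relevance) < \
       amortized_unfairness(static, relevance)
```

The paper's mechanism chooses each next ranking by optimizing this kind of trade-off subject to ranking-quality constraints, cast as an integer linear program.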
A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity
We map the recently proposed notions of algorithmic fairness to economic
models of Equality of opportunity (EOP)---an extensively studied ideal of
fairness in political philosophy. We formally show that through our conceptual
mapping, many existing definitions of algorithmic fairness, such as predictive
value parity and equality of odds, can be interpreted as special cases of EOP.
In this respect, our work serves as a unifying moral framework for
understanding existing notions of algorithmic fairness. Most importantly, this
framework allows us to explicitly spell out the moral assumptions underlying
each notion of fairness, and interpret recent fairness impossibility results in
a new light. Last but not least, inspired by luck egalitarian models of EOP,
we propose a new family of measures for algorithmic fairness. We illustrate our
proposal empirically and show that employing a measure of algorithmic
(un)fairness when its underlying moral assumptions are not satisfied, can have
devastating consequences for the disadvantaged group's welfare.
Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
We draw attention to an important, yet largely overlooked aspect of
evaluating fairness for automated decision making systems---namely risk and
welfare considerations. Our proposed family of measures corresponds to the
long-established formulations of cardinal social welfare in economics, and is
justified by the Rawlsian conception of fairness behind a veil of ignorance.
The convex formulation of our welfare-based measures of fairness allows us to
integrate them as a constraint into any convex loss minimization pipeline. Our
empirical analysis reveals interesting trade-offs between our proposal and (a)
prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of
individual fairness. Furthermore and perhaps most importantly, our work
provides both heuristic justification and empirical evidence suggesting that a
lower-bound on our measures often leads to bounded inequality in algorithmic
outcomes; hence presenting the first computationally feasible mechanism for
bounding individual-level inequality.
Comment: Thirty-second Conference on Neural Information Processing Systems (NIPS 2018).
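A minimal sketch of a risk-averse cardinal welfare measure of the kind the abstract alludes to (the specific constant-relative-risk-aversion form and the value of rho below are illustrative assumptions, not the paper's exact family):

```python
import numpy as np

def social_welfare(benefits, rho=2.0):
    """A cardinal social-welfare function with constant relative risk
    aversion rho > 1 (one common economic choice): higher is better,
    and equal allocations are preferred to unequal ones of the same sum."""
    b = np.asarray(benefits, dtype=float)
    return np.mean(b ** (1 - rho) / (1 - rho))

# Behind a veil of ignorance, a risk-averse evaluator prefers the equal
# allocation to an unequal one with the same total benefit:
assert social_welfare([1.0, 1.0]) > social_welfare([0.5, 1.5])
```

Because such measures are concave in the benefits, a lower bound on welfare is a convex constraint, which is what lets it be added to a convex loss-minimization pipeline.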
Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
As algorithms are increasingly used to make important decisions that affect
human lives, ranging from social benefit assignment to predicting risk of
criminal recidivism, concerns have been raised about the fairness of
algorithmic decision making. Most prior works on algorithmic fairness
normatively prescribe how fair decisions ought to be made. In contrast, here,
we descriptively survey users for how they perceive and reason about fairness
in algorithmic decision making.
A key contribution of this work is the framework we propose to understand why
people perceive certain features as fair or unfair to be used in algorithms.
Our framework identifies eight properties of features, such as relevance,
volitionality and reliability, as latent considerations that inform people's
moral judgments about the fairness of feature use in decision-making
algorithms. We validate our framework through a series of scenario-based
surveys with 576 people. We find that, based on a person's assessment of the
eight latent properties of a feature in our exemplar scenario, we can
accurately (> 85%) predict if the person will judge the use of the feature as
fair.
Our findings have important implications. At a high-level, we show that
people's unfairness concerns are multi-dimensional and argue that future
studies need to address unfairness concerns beyond discrimination. At a
low-level, we find considerable disagreements in people's fairness judgments.
We identify root causes of the disagreements, and note possible pathways to
resolve them.
Comment: To appear in the Proceedings of the Web Conference (WWW 2018). Code
available at https://fate-computing.mpi-sws.org/procedural_fairness
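The claimed predictability of fairness judgments can be sketched with a toy logistic model over ratings of the eight latent properties (the weights, scales, and threshold are invented for illustration; only relevance, volitionality, and reliability are named in the abstract):

```python
import numpy as np

def predict_fair(props, weights, bias=-4.0):
    """props: a person's ratings of the eight latent properties of a
    feature (0-1 scale). A simple logistic model of the binary judgment
    'using this feature in the algorithm is fair'."""
    z = props @ weights + bias
    return 1 / (1 + np.exp(-z)) > 0.5

# Synthetic weights: properties such as relevance and reliability push
# toward 'fair'; the signs and magnitudes here are illustrative only.
weights = np.array([2.0, 1.5, 1.0, 0.5, 0.5, 0.5, 0.5, 0.5])

high = np.full(8, 0.9)   # feature rated high on all eight properties
low = np.full(8, 0.1)    # feature rated low on all eight properties
assert predict_fair(high, weights)
assert not predict_fair(low, weights)
```

In the paper, a model of this general shape fitted to survey responses predicts individual fairness judgments with over 85% accuracy.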
Equality of Voice: Towards Fair Representation in Crowdsourced Top-K Recommendations
To help their users to discover important items at a particular time, major
websites like Twitter, Yelp, TripAdvisor or NYTimes provide Top-K
recommendations (e.g., 10 Trending Topics, Top 5 Hotels in Paris or 10 Most
Viewed News Stories), which rely on crowdsourced popularity signals to select
the items. However, different sections of a crowd may have different
preferences, and there is a large silent majority who do not explicitly express
their opinion. Also, the crowd often consists of actors like bots, spammers, or
people running orchestrated campaigns. Recommendation algorithms today largely
do not consider such nuances, hence are vulnerable to strategic manipulation by
small but hyper-active user groups.
To fairly aggregate the preferences of all users while recommending top-K
items, we borrow ideas from prior research on social choice theory, and
identify a voting mechanism called Single Transferable Vote (STV) as having
many of the fairness properties we desire in top-K item (s)elections. We
develop an innovative mechanism to attribute preferences of the silent
majority, which also makes STV completely operational. We show the generalizability of our
approach by implementing it on two different real-world datasets. Through
extensive experimentation and comparison with state-of-the-art techniques, we
show that our proposed approach provides maximum user satisfaction, and cuts
down drastically on items disliked by most but hyper-actively promoted by a few
users.
Comment: In the proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Please cite the conference version.
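A compact sketch of STV for top-K selection (simplified: surplus votes are transferred whole rather than fractionally, unlike full STV, and tie-breaking is arbitrary; the paper additionally attributes preferences to the silent majority before running the election):

```python
from collections import Counter

def stv_topk(ballots, k):
    """Single Transferable Vote, simplified: repeatedly elect candidates
    reaching the Droop quota; otherwise eliminate the weakest candidate
    and transfer their ballots to the next surviving preference."""
    ballots = [list(b) for b in ballots]
    quota = len(ballots) // (k + 1) + 1          # Droop quota
    elected, active = [], {c for b in ballots for c in b}
    while len(elected) < k and active:
        tally = Counter(b[0] for b in ballots if b)
        winners = [c for c, v in tally.items() if v >= quota]
        if winners:
            chosen = max(winners, key=lambda c: tally[c])
            elected.append(chosen)
            active.discard(chosen)
        else:
            loser = min(active, key=lambda c: tally.get(c, 0))
            active.discard(loser)
        ballots = [[c for c in b if c in active] for b in ballots]
    return elected

ballots = [["a", "b"], ["a", "b"], ["a", "c"], ["c", "b"], ["b", "c"]]
top2 = stv_topk(ballots, 2)
```

With these five ballots and a Droop quota of 2, "a" is elected first with 3 first-preference votes, after which the remaining preferences elect "b".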